In the vast and vital realm of agriculture, the need for modernization has never been more pressing. Despite numerous technological advancements, the agricultural sector continues to rely heavily on manual labor to sort and identify various plants and weeds, a labor-intensive task that consumes time and energy. This trillion-dollar industry is ripe for transformation through the infusion of Artificial Intelligence (AI) and Deep Learning. By harnessing the power of AI, we can significantly reduce the burden of manual labor, revolutionize plant identification, and pave the way for improved crop yields and more sustainable agricultural practices.
The agricultural industry faces a fundamental challenge: the arduous task of manually sorting and identifying different plant seedlings. This time-consuming process hampers efficiency and limits the capacity for higher-order decision making in agriculture. To address this challenge, our mission is to leverage cutting-edge AI technology to automate and enhance the plant classification process. Our goal is to build a Convolutional Neural Network (CNN) model capable of classifying plant seedlings into their respective 12 categories. By doing so, we aim to streamline agricultural operations, increase productivity, and contribute to the long-term sustainability of agriculture.
We have collaborated with the esteemed Aarhus University Signal Processing group and the University of Southern Denmark to access a comprehensive dataset containing images of unique plants belonging to 12 different species. This dataset, available as "images.npy" and "Labels.csv," presents an opportunity to develop a powerful CNN model. The model will be trained to recognize and classify plant species from images, automating a task that traditionally relied on manual labor.
Careful Problem Analysis: We begin by thoroughly understanding the problem statement and its implications in the agriculture sector.
Data Access: The dataset, including images and corresponding labels, is retrieved from the Olympus platform.
Data Preprocessing: Given the large volume of data, the images have been converted into numpy arrays stored in "images.npy," while labels are organized in "Labels.csv."
Model Development: We will design and build a Convolutional Neural Network (CNN) capable of classifying plant seedlings accurately. The CNN will leverage the power of deep learning to identify plant species from images.
Computational Power: To expedite model training, we recommend using Google Colab with GPU support, ensuring faster execution, especially when dealing with complex CNN architectures.
Our solution approach aims to harness the capabilities of AI and Deep Learning to empower the agricultural industry. By automating plant classification, we anticipate improved efficiency, better crop yields, and a brighter future for sustainable agricultural practices. The fusion of technology and agriculture holds immense promise, and we are at the forefront of this transformative journey.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import cv2
from tensorflow.keras.applications.mobilenet import preprocess_input
import warnings
warnings.filterwarnings('ignore')
# Load image data (np.load defaults to allow_pickle=False; this archive requires it)
images = np.load('images.npy', allow_pickle=True)
# Load labels
labels = pd.read_csv('Labels.csv')
# Check data dimensions
print("Image Data Shape:", images.shape)
print("Labels Data Head:", labels.head())
Image Data Shape: (4750, 128, 128, 3)
Labels Data Head:
                       Label
0  Small-flowered Cranesbill
1  Small-flowered Cranesbill
2  Small-flowered Cranesbill
3  Small-flowered Cranesbill
4  Small-flowered Cranesbill
labels.head()
| | Label |
|---|---|
| 0 | Small-flowered Cranesbill |
| 1 | Small-flowered Cranesbill |
| 2 | Small-flowered Cranesbill |
| 3 | Small-flowered Cranesbill |
| 4 | Small-flowered Cranesbill |
import matplotlib.pyplot as plt
import seaborn as sns
# Visualize sample images
plt.figure(figsize=(10, 5))
for i in range(5):
    plt.subplot(1, 5, i+1)
    plt.imshow(images[i])
    plt.title(labels['Label'][i], fontsize=8)  # adjust the fontsize as needed
    plt.axis('off')
# Visualize label distribution
plt.figure(figsize=(10, 5))
sns.countplot(data=labels, x='Label')
plt.xticks(rotation=90)
plt.title("Label Distribution")
plt.show()
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
# Get unique category names
category_names = np.unique(labels['Label'])
# Create a dictionary to store the first image for each category
first_images = {category: None for category in category_names}
# Find and store the first image for each category
for category in category_names:
    first_image_index = labels.index[labels['Label'] == category][0]
    first_images[category] = images[first_image_index]
# Visualize one image per category
plt.figure(figsize=(12, 6))
for i, category in enumerate(category_names):
    plt.subplot(2, 6, i + 1)  # adjust the subplot layout as needed
    plt.imshow(first_images[category])
    plt.title(category, fontsize=8)
    plt.axis('off')
# Visualize label distribution
plt.figure(figsize=(12, 6))
sns.countplot(data=labels, x='Label')
plt.xticks(rotation=90)
plt.title("Label Distribution")
plt.show()
plt.rcParams["figure.figsize"] = (12,5)
sns.countplot(x=labels.iloc[:,-1],order = labels['Label'].value_counts().index, palette='Reds_r')
plt.xlabel('Plant Categories')
plt.xticks(rotation=45)
# Compute image statistics
image_mean = np.mean(images)
image_median = np.median(images)
image_std = np.std(images)
# Label statistics
num_classes = labels['Label'].nunique()
class_counts = labels['Label'].value_counts()
print("Image Mean:", image_mean)
print("Image Median:", image_median)
print("Image Standard Deviation:", image_std)
print("Number of Unique Classes:", num_classes)
print("Class Counts:\n", class_counts)
Image Mean: 70.04363745545504
Image Median: 67.0
Image Standard Deviation: 31.996876308515
Number of Unique Classes: 12
Class Counts:
Loose Silky-bent             654
Common Chickweed             611
Scentless Mayweed            516
Small-flowered Cranesbill    496
Fat Hen                      475
Charlock                     390
Sugar beet                   385
Cleavers                     287
Black-grass                  263
Shepherds Purse              231
Common wheat                 221
Maize                        221
Name: Label, dtype: int64
# Check for missing values
missing_values = labels.isnull().sum()
# Data augmentation (if needed)
# You can apply techniques like rotation, scaling, and flipping to augment image data.
print("Missing Values:\n", missing_values)
Missing Values:
Label    0
dtype: int64
from sklearn.decomposition import PCA
# Apply PCA to image data for feature extraction
pca = PCA(n_components=2)
image_features = pca.fit_transform(images.reshape(images.shape[0], -1))
# Get the category names from the 'Label' column
category_names = labels['Label'].unique()
# Create an empty list to store legend handles and labels
legend_handles = []
# Visualize the extracted features with category labels
plt.figure(figsize=(8, 6))
for category in category_names:
    # Filter data points for the current category
    data_points = image_features[labels['Label'] == category]
    # Create a scatter plot for the current category
    scatter = plt.scatter(data_points[:, 0], data_points[:, 1], label=category)
    # Append the legend handle for the current category
    legend_handles.append(scatter)
# Add legend with handles and labels
plt.legend(handles=legend_handles, title="Classes")
plt.title("PCA Features")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
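Two principal components of raw pixels usually capture only a small share of the total variance, so the scatter above can understate how separable the classes really are. A quick, hedged sanity check (shown here on a random stand-in matrix rather than the actual `images` array) is to inspect `explained_variance_ratio_`:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the flattened image matrix; the notebook would use
# images.reshape(images.shape[0], -1) instead
rng = np.random.default_rng(42)
X_flat = rng.normal(size=(200, 64)).astype("float32")

pca = PCA(n_components=2)
features = pca.fit_transform(X_flat)

# Fraction of pixel variance the 2D scatter actually captures
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
```

If the summed ratio is low, overlapping clusters in the 2D plot do not necessarily mean the classes are inseparable in the full pixel space.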
from sklearn.decomposition import PCA
# Apply PCA to image data for feature extraction
pca = PCA(n_components=2)
image_features = pca.fit_transform(images.reshape(images.shape[0], -1))
# Get the category names from the 'Label' column
category_names = labels['Label'].unique()
# Create a separate PCA visualization for each category
plt.figure(figsize=(15, 8))
# Determine the number of rows and columns for subplots
num_rows = 3 # Adjust the number of rows as needed
num_cols = len(category_names) // num_rows + 1
for i, category in enumerate(category_names):
    # Filter data points for the current category
    data_points = image_features[labels['Label'] == category]
    # Create a scatter plot for the current category
    plt.subplot(num_rows, num_cols, i + 1)
    plt.scatter(data_points[:, 0], data_points[:, 1])
    plt.title(category)
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")
plt.tight_layout()
plt.show()
import numpy as np
import seaborn as sns
# Collect image shapes (all images share the 128x128x3 shape, so this is a
# sanity check rather than a true distribution)
image_shapes = np.array([img.shape for img in images])
# Plot a histogram of each shape dimension (height, width, channels)
plt.figure(figsize=(8, 6))
sns.histplot(data=image_shapes, bins=20, kde=True)
plt.title("Distribution of Image Shapes")
plt.xlabel("Dimension size (pixels)")
plt.ylabel("Count")
plt.show()
# Flatten and concatenate all images into a single array
pixel_values = np.concatenate([img.flatten() for img in images])
# Plot a histogram of pixel intensities
plt.figure(figsize=(8, 6))
sns.histplot(data=pixel_values, bins=50, kde=True)
plt.title("Distribution of Pixel Intensities")
plt.xlabel("Pixel Intensity")
plt.ylabel("Frequency")
plt.show()
The distribution of label counts in the dataset suggests that it may not be well-balanced. In a well-balanced dataset, each class would have roughly the same number of samples. However, in this dataset:
"Loose Silky-bent" and "Common Chickweed" have a significantly higher number of samples compared to the other classes, with 654 and 611 samples, respectively.
Some classes, such as "Black-grass," "Shepherds Purse," "Common wheat," and "Maize," have notably lower sample counts, with 263, 231, 221, and 221 samples, respectively.
The remaining classes fall in between these extremes.
This imbalance in label counts could potentially impact the performance of machine learning models, especially if the model is sensitive to class imbalances. Models may perform better on classes with more samples and struggle with classes that have fewer samples.
In practice, addressing class imbalance may involve techniques such as oversampling the minority classes, undersampling the majority classes, or using more advanced methods like Synthetic Minority Over-sampling Technique (SMOTE) to generate synthetic samples for minority classes.
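Besides resampling techniques like SMOTE, a lightweight alternative is to weight the loss function by inverse class frequency. As a sketch (with illustrative counts mirroring the dataset's two extreme classes), scikit-learn can compute balanced weights, and the resulting dict can be passed to Keras' `model.fit` via its `class_weight` argument:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy labels mirroring the dataset's imbalance (counts are illustrative)
y = np.array(["Loose Silky-bent"] * 654 + ["Maize"] * 221)

classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)

# Keyed by integer class index, as Keras expects
class_weight = {i: w for i, w in enumerate(weights)}
print(class_weight)
```

With "balanced" weighting, each weight is proportional to 1/count, so minority classes contribute as much to the loss as majority classes.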
There is a lot of noise not relevant to the plants themselves, so we should apply a mask to the images to remove the background.
categ = ['Black-grass', 'Charlock', 'Cleavers', 'Common Chickweed', 'Common wheat', 'Fat Hen', 'Loose Silky-bent',
'Maize', 'Scentless Mayweed', 'Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
num_categ = len(categ)
num_categ
12
# Importing ImageGrid to plot the plant sample images
from mpl_toolkits.axes_grid1 import ImageGrid
# Defining a 12x12 grid figure
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index
# Plotting 12 images from each plant category
for category_id, category in enumerate(categ):
    condition = labels["Label"] == category
    plant_indices = index[condition].tolist()
    for j in range(12):
        ax = grid[i]
        ax.imshow(images[plant_indices[j]])
        ax.axis('off')
        if i % num_categ == num_categ - 1:
            # Printing the name of each category
            ax.text(200, 70, category, verticalalignment='center')
        i += 1
plt.show()
import matplotlib.pyplot as plt
import seaborn as sns
# Resizing the image to half its size, i.e., from 128x128 to 64x64
img = cv2.resize(images[1000],None,fx=0.50,fy=0.50)
#Applying Gaussian Blur
img_g = cv2.GaussianBlur(img,(3,3),0)
#Displaying preprocessed and original images
print("Resized to 50% and applied Gaussian Blurring with kernel size 3X3")
plt.imshow(img_g)
Resized to 50% and applied Gaussian Blurring with kernel size 3X3
print("Original Image of size 128X128")
plt.imshow(images[1000])
Original Image of size 128X128
# Convert to HSV image
hsvImg = cv2.cvtColor(img_g, cv2.COLOR_BGR2HSV)
plt.imshow(hsvImg)
# Create mask (parameters - green color range)
lower_green = (25, 40, 50)
upper_green = (75, 255, 255)
mask = cv2.inRange(hsvImg, lower_green, upper_green)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# Create bool mask
bMask = mask > 0
# Apply the mask
clearImg = np.zeros_like(img, np.uint8) # Create empty image
clearImg[bMask] = img[bMask] # Apply boolean mask to the origin image
#Masked Image after removing the background
plt.imshow(clearImg)
data_copy = images.copy()
lower_green = (25, 40, 50)
upper_green = (75, 255, 255)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
preprocessed_data_color = []
for img in images:
    resize_img = cv2.resize(img, None, fx=0.50, fy=0.50)
    Gblur_img = cv2.GaussianBlur(resize_img, (3, 3), 0)
    hsv_img = cv2.cvtColor(Gblur_img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv_img, lower_green, upper_green)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    bMask = mask > 0
    clearImg = np.zeros_like(resize_img, np.uint8)  # Create empty image
    clearImg[bMask] = resize_img[bMask]  # Apply boolean mask to the original image
    preprocessed_data_color.append(clearImg)
#Preprocessed all plant images
preprocessed_data_color = np.asarray(preprocessed_data_color)
#Visualizing our preprocessed color plant images
from mpl_toolkits.axes_grid1 import ImageGrid
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index
for category_id, category in enumerate(categ):
    condition = labels["Label"] == category
    plant_indices = index[condition].tolist()
    for j in range(12):
        ax = grid[i]
        ax.imshow(preprocessed_data_color[plant_indices[j]] / 255.)
        ax.axis('off')
        if i % num_categ == num_categ - 1:
            ax.text(70, 30, category, verticalalignment='center')
        i += 1
plt.show()
preprocessed_data_color.shape
(4750, 64, 64, 3)
#CONVERTING IMAGES TO GRAY SCALE
preprocessed_data_gs = []
for img in preprocessed_data_color:
    gi = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    preprocessed_data_gs.append(gi)
preprocessed_data_gs = np.asarray(preprocessed_data_gs)
preprocessed_data_gs.shape
(4750, 64, 64)
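Note that the grayscale array has shape `(4750, 64, 64)` with no channel axis. The training below uses the 3-channel color data, but if this grayscale variant were fed to a Keras `Conv2D` model, it would first need a trailing channel dimension. A minimal sketch (using a zero-filled stand-in for `preprocessed_data_gs`):

```python
import numpy as np

# Stand-in for preprocessed_data_gs; Conv2D expects (N, H, W, C)
gs = np.zeros((10, 64, 64), dtype="float32")
gs_4d = np.expand_dims(gs, axis=-1)
print(gs_4d.shape)  # (10, 64, 64, 1)
```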
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index
for category_id, category in enumerate(categ):
condition = labels["Label"] == category
plant_indices = index[condition].tolist()
for j in range(0,12):
ax = grid[i]
# img = read_img(filepath, (224, 224))
# ax.imshow(img / 255.)
ax.imshow(preprocessed_data_gs[plant_indices[j]],cmap='gray',vmin=0, vmax=255)
# ax[i].set_title(ylabels.iloc[i].to_list(),fontsize=7,rotation=45)
ax.axis('off')
if i % num_categ == num_categ - 1:
ax.text(70, 30, category, verticalalignment='center')
i += 1
plt.show();
# Converting grayscale images to edge images using Sobel and Laplacian
# (the grayscale data is still in the 0-255 uint8 range at this point, so
# multiplying by 255 would overflow; pass the images as-is)
sobel = cv2.Sobel(preprocessed_data_gs[0], cv2.CV_64F, 1, 1, ksize=3)
laplacian = cv2.Laplacian(preprocessed_data_gs[0], cv2.CV_64F)
plt.imshow(sobel)
plt.imshow(laplacian)
# Converting all grayscale images to Laplacian edge-detected images
preprocessed_data_Edge_Lap = []
for img in preprocessed_data_gs:
    egi = cv2.Laplacian(img, cv2.CV_64F)
    preprocessed_data_Edge_Lap.append(egi)
preprocessed_data_Edge_Lap = np.asarray(preprocessed_data_Edge_Lap)
preprocessed_data_Edge_Lap.shape
(4750, 64, 64)
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index
for category_id, category in enumerate(categ):
    condition = labels["Label"] == category
    plant_indices = index[condition].tolist()
    for j in range(12):
        ax = grid[i]
        ax.imshow(preprocessed_data_Edge_Lap[plant_indices[j]], cmap='gray', vmin=0, vmax=255)
        ax.axis('off')
        if i % num_categ == num_categ - 1:
            ax.text(70, 30, category, verticalalignment='center')
        i += 1
plt.show()
#NORMALIZING IMAGES
preprocessed_data_gs = preprocessed_data_gs / 255.
preprocessed_data_color = preprocessed_data_color / 255.
preprocessed_data_Edge_Lap = preprocessed_data_Edge_Lap / 255.
labels['Label'] = labels['Label'].astype('category')
labels['Label'] = labels['Label'].cat.codes
labels.value_counts()
Label
6     654
3     611
8     516
10    496
5     475
1     390
11    385
2     287
0     263
9     231
4     221
7     221
dtype: int64
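Because `cat.codes` overwrites the species names with integers, it is worth saving a code-to-name mapping first so predictions can later be reported as species names. A small sketch with toy labels (in the notebook this would be done on `labels['Label']` before the overwrite):

```python
import pandas as pd

# Toy labels; codes are assigned in alphabetical order of the categories
names = pd.Series(["Charlock", "Maize", "Charlock"], name="Label")
cat = names.astype("category")

code_to_name = dict(enumerate(cat.cat.categories))
codes = cat.cat.codes
print(code_to_name)  # {0: 'Charlock', 1: 'Maize'}
```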
from tensorflow.keras.utils import to_categorical
labels = to_categorical(labels, num_classes=12)
print("Shape of y_train:", labels.shape)
print("One value of y_train:", labels[0])
Shape of y_train: (4750, 12)
One value of y_train: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
from sklearn.model_selection import train_test_split
random_state = 42
val_split = 0.20
# 1st split: train vs. holdout (the holdout is split again below)
X_train, X_test1, y_train, y_test1 = train_test_split(preprocessed_data_color, labels, test_size=0.20, stratify=labels, random_state=random_state)
# Parallel split of the original color images, for individual image predictions
X_train_color, X_test1_color, y_train_color, y_test1_color = train_test_split(images, labels, test_size=0.20, stratify=labels, random_state=random_state)
# 2nd split: holdout into validation and test
X_val, X_test, y_val, y_test = train_test_split(X_test1, y_test1, test_size=0.20, stratify=y_test1, random_state=random_state)
X_val_color, X_test_color, y_val_color, y_test_color = train_test_split(X_test1_color, y_test1, test_size=0.20, stratify=y_test1, random_state=random_state)
X = np.concatenate((X_train, X_test1))
y = np.concatenate((y_train, y_test1))
print("X_train shape: ", X_train.shape)
print("y_train shape: ", y_train.shape)
print("X_val shape: ", X_val.shape)
print("y_val shape: ", y_val.shape)
print("X_test shape: ", X_test.shape)
print("y_test shape: ", y_test.shape)
print("X shape: ", X.shape)
print("y shape: ", y.shape)
X_train shape: (3800, 64, 64, 3)
y_train shape: (3800, 12)
X_val shape: (760, 64, 64, 3)
y_val shape: (760, 12)
X_test shape: (190, 64, 64, 3)
y_test shape: (190, 12)
X shape: (4750, 64, 64, 3)
y shape: (4750, 12)
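The two-stage split yields roughly 80% train, 16% validation, and 4% test of the 4750 samples (3800 / 760 / 190), since the second split takes 20% of the 20% holdout. The arithmetic can be checked on a toy balanced dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy dataset: 1000 samples across 10 balanced classes
X = np.arange(1000).reshape(-1, 1)
y = np.repeat(np.arange(10), 100)

X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.20, stratify=y, random_state=42)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.20, stratify=y_tmp, random_state=42)

print(len(X_tr), len(X_val), len(X_te))  # 800 160 40 -> 80% / 16% / 4%
```

A 4% test set (190 images) is quite small; per-class test counts for the rarest species will be in the single digits, so test metrics should be read with that in mind.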
X_train = X_train.reshape(X_train.shape[0], 64, 64, 3)
X_val = X_val.reshape(X_val.shape[0], 64, 64, 3)
X_test = X_test.reshape(X_test.shape[0], 64, 64, 3)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_val = X_val.astype('float32')
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(shear_range=0.2,
                                   rotation_range=180,      # randomly rotate images up to 180 degrees
                                   zoom_range=0.1,          # randomly zoom images
                                   width_shift_range=0.1,   # randomly shift images horizontally
                                   height_shift_range=0.1,  # randomly shift images vertically
                                   horizontal_flip=True,    # randomly flip images horizontally
                                   vertical_flip=True       # randomly flip images vertically
                                   )
# Creating data generators
random_state = 42
batch_size = 32
training_set = train_datagen.flow(X_train, y_train, seed=random_state, shuffle=True)
# Note: this generator applies the same augmentation to the test data; augmentation is
# normally reserved for training data only (this generator is not used in fit below,
# which validates directly on X_val/y_val)
validation_set = train_datagen.flow(X_test, y_test, seed=random_state, shuffle=True)
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential      # groups a linear stack of layers into a tf.keras.Model
from tensorflow.keras.layers import Conv2D          # 2D convolution layer
from tensorflow.keras.layers import MaxPooling2D    # max pooling for 2D spatial data
from tensorflow.keras.layers import Flatten         # flattens the input; does not affect the batch size
from tensorflow.keras.layers import Dense, Dropout  # Dense: regular densely-connected layer; Dropout: applies dropout to the input
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import optimizers
# Intializing a sequential model
model1 = Sequential()
#Layer 1
model1.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu', padding = 'same'))
model1.add(layers.BatchNormalization())
model1.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 2
model1.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model1.add(layers.BatchNormalization())
model1.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 3
model1.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model1.add(layers.BatchNormalization())
model1.add(MaxPooling2D(pool_size = (2, 2),strides=2))
# Flattening the layer before fully connected layers
model1.add(Flatten())
# Adding a fully connected layer with 128 neurons
model1.add(layers.BatchNormalization())
model1.add(Dense(units = 128, activation = 'relu'))
model1.add(Dropout(0.2))
# The final output layer with 12 neurons to predict the categorical classification
model1.add(Dense(units = 12, activation = 'softmax'))
# initiate Adam optimizer
adam_opt = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model1.compile(optimizer = adam_opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])
model1.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                                Output Shape         Param #
=================================================================
 conv2d (Conv2D)                             (None, 64, 64, 32)   896
 batch_normalization (BatchNormalization)    (None, 64, 64, 32)   128
 max_pooling2d (MaxPooling2D)                (None, 32, 32, 32)   0
 conv2d_1 (Conv2D)                           (None, 32, 32, 32)   9248
 batch_normalization_1 (BatchNormalization)  (None, 32, 32, 32)   128
 max_pooling2d_1 (MaxPooling2D)              (None, 16, 16, 32)   0
 conv2d_2 (Conv2D)                           (None, 16, 16, 32)   9248
 batch_normalization_2 (BatchNormalization)  (None, 16, 16, 32)   128
 max_pooling2d_2 (MaxPooling2D)              (None, 8, 8, 32)     0
 flatten (Flatten)                           (None, 2048)         0
 batch_normalization_3 (BatchNormalization)  (None, 2048)         8192
 dense (Dense)                               (None, 128)          262272
 dropout (Dropout)                           (None, 128)          0
 dense_1 (Dense)                             (None, 12)           1548
=================================================================
Total params: 291788 (1.11 MB)
Trainable params: 287500 (1.10 MB)
Non-trainable params: 4288 (16.75 KB)
_________________________________________________________________
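The parameter counts in the summary can be verified by hand: a Conv2D layer has kernel_height x kernel_width x input_channels x filters weights plus one bias per filter, a Dense layer has inputs x outputs weights plus one bias per output, and BatchNormalization stores four values (gamma, beta, moving mean, moving variance) per channel. A quick check:

```python
def conv2d_params(kernel, in_channels, filters):
    # kernel weights plus one bias per filter
    return kernel * kernel * in_channels * filters + filters

def dense_params(in_units, out_units):
    # weight matrix plus one bias per output unit
    return in_units * out_units + out_units

def batchnorm_params(channels):
    # gamma, beta, moving mean, moving variance
    return 4 * channels

print(conv2d_params(3, 3, 32))   # 896    (first Conv2D)
print(conv2d_params(3, 32, 32))  # 9248   (later Conv2D layers)
print(batchnorm_params(32))      # 128
print(dense_params(2048, 128))   # 262272
print(dense_params(128, 12))     # 1548
```

Only half of each BatchNormalization layer's parameters (gamma and beta) are trainable, which accounts for the 4288 non-trainable parameters.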
# Early stopping
callback_es = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20, min_delta=0.0001, restore_best_weights=True)
# Fit the compiled model1 to your training data
model1_history = model1.fit(
training_set,
batch_size=batch_size,
epochs=100,
validation_data=(X_val, y_val),
shuffle=True,
callbacks=[callback_es]
)
Epoch 1/100
119/119 [==============================] - 7s 62ms/step - loss: 0.9845 - accuracy: 0.6703 - val_loss: 5.7974 - val_accuracy: 0.1487
Epoch 2/100
119/119 [==============================] - 8s 64ms/step - loss: 0.8723 - accuracy: 0.7053 - val_loss: 0.9385 - val_accuracy: 0.6671
...
Epoch 47/100
119/119 [==============================] - 10s 85ms/step - loss: 0.3128 - accuracy: 0.8839 - val_loss: 1.1148 - val_accuracy: 0.7566
Epoch 48/100
119/119 [==============================] - 10s 82ms/step - loss: 0.2960 - accuracy: 0.8905 - val_loss: 0.4808 - val_accuracy: 0.8474
Training Loss: The training loss, denoted as "loss: 0.2960," is a measure of how well the CNN is learning to make predictions. In this case, the loss value is 0.2960, which is relatively low. A lower loss indicates that the model's predictions closely match the actual target values in the training data.
Training Accuracy: The training accuracy, noted as "accuracy: 0.8905," tells us what proportion of the training data the model correctly classified. In this case, the model achieved an accuracy of 89.05%, which is quite high. This means that it made accurate predictions for approximately 89.05% of the training examples.
Validation Loss: Moving on to the validation results, "val_loss: 0.4808" represents the loss on a separate validation dataset. This dataset contains data that the model hasn't seen during training. The validation loss here is 0.4808, indicating that the model's performance on unseen data is also relatively good, although it's slightly higher than the training loss.
Validation Accuracy: Lastly, "val_accuracy: 0.8474" signifies the accuracy achieved on the validation dataset. With a validation accuracy of 84.74%, the model demonstrates its ability to generalize well to new, previously unseen data.
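To make the train/validation comparison concrete, the gap between the two sets of figures can be computed directly. A small sketch, using the epoch-48 values quoted from the log above:

```python
# Final-epoch metrics taken from the training log above
train_loss, train_acc = 0.2960, 0.8905
val_loss, val_acc = 0.4808, 0.8474

# A positive gap on both measures signals mild overfitting:
# the model does somewhat better on data it has already seen.
loss_gap = val_loss - train_loss
acc_gap = train_acc - val_acc
print(f"loss gap: {loss_gap:.4f}, accuracy gap: {acc_gap:.4f}")
# → loss gap: 0.1848, accuracy gap: 0.0431
```

A loss gap of roughly 0.18 is noticeable but not severe; it is this gap that the L2-regularized model below attempts to shrink.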
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers, regularizers
# Initializing a sequential model
model2 = Sequential()
# Layer 1 with L2 regularization
model2.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l2(0.01)))
model2.add(layers.BatchNormalization())
model2.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# Layer 2 with L2 regularization
model2.add(Conv2D(32, (3, 3), activation='relu', padding="same", kernel_regularizer=regularizers.l2(0.01)))
model2.add(layers.BatchNormalization())
model2.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# Layer 3 with L2 regularization
model2.add(Conv2D(32, (3, 3), activation='relu', padding="same", kernel_regularizer=regularizers.l2(0.01)))
model2.add(layers.BatchNormalization())
model2.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# Flattening the layer before fully connected layers
model2.add(Flatten())
# Adding a fully connected layer with 128 neurons and L2 regularization
model2.add(layers.BatchNormalization())
model2.add(Dense(units=128, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model2.add(Dropout(0.2))
# The final output layer with 12 neurons to predict the categorical classification
model2.add(Dense(units=12, activation='softmax'))
# Initiate the Adam optimizer and compile the model (one compile call is sufficient)
from tensorflow.keras.optimizers import Adam
adam_opt = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model2.compile(optimizer=adam_opt, loss='categorical_crossentropy', metrics=['accuracy'])
model2.summary()
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                                Output Shape          Param #
=================================================================
 conv2d_6 (Conv2D)                           (None, 64, 64, 32)    896
 batch_normalization_8 (BatchNormalization)  (None, 64, 64, 32)    128
 max_pooling2d_6 (MaxPooling2D)              (None, 32, 32, 32)    0
 conv2d_7 (Conv2D)                           (None, 32, 32, 32)    9248
 batch_normalization_9 (BatchNormalization)  (None, 32, 32, 32)    128
 max_pooling2d_7 (MaxPooling2D)              (None, 16, 16, 32)    0
 conv2d_8 (Conv2D)                           (None, 16, 16, 32)    9248
 batch_normalization_10 (BatchNormalization) (None, 16, 16, 32)    128
 max_pooling2d_8 (MaxPooling2D)              (None, 8, 8, 32)      0
 flatten_2 (Flatten)                         (None, 2048)          0
 batch_normalization_11 (BatchNormalization) (None, 2048)          8192
 dense_4 (Dense)                             (None, 128)           262272
 dropout_2 (Dropout)                         (None, 128)           0
 dense_5 (Dense)                             (None, 12)            1548
=================================================================
Total params: 291788 (1.11 MB)
Trainable params: 287500 (1.10 MB)
Non-trainable params: 4288 (16.75 KB)
_________________________________________________________________
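The parameter counts in the summary can be verified by hand, which is a useful sanity check on the architecture. A sketch (the layer shapes are taken from the summary above):

```python
# Verifying the per-layer parameter counts reported by model2.summary()
def conv2d_params(k_h, k_w, in_ch, out_ch):
    # weights per filter (k_h * k_w * in_ch) plus one bias per filter
    return (k_h * k_w * in_ch + 1) * out_ch

def dense_params(in_units, out_units):
    # one weight per input-output pair plus one bias per output unit
    return (in_units + 1) * out_units

def batchnorm_params(channels):
    # gamma and beta (trainable) + moving mean and variance (non-trainable)
    return 4 * channels

assert conv2d_params(3, 3, 3, 32) == 896       # conv2d_6
assert conv2d_params(3, 3, 32, 32) == 9248     # conv2d_7 and conv2d_8
assert dense_params(2048, 128) == 262272       # dense_4
assert dense_params(128, 12) == 1548           # dense_5
assert batchnorm_params(32) == 128             # each BN after a conv layer
assert batchnorm_params(2048) == 8192          # BN before dense_4

total = 896 + 128 + 9248 + 128 + 9248 + 128 + 8192 + 262272 + 1548
print(total)  # → 291788, matching the summary
```

Note that the 4288 non-trainable parameters are exactly half of each BatchNormalization layer's count (the moving mean and variance), which is why they appear even though every layer was added as trainable.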
# Fit the compiled model2 to the training data
model2_history = model2.fit(
    training_set,
    batch_size=batch_size,
    epochs=100,
    validation_data=(X_val, y_val),
    shuffle=True,
    callbacks=[callback_es]
)
Epoch 1/100 119/119 [==============================] - 11s 87ms/step - loss: 4.7404 - accuracy: 0.4016 - val_loss: 10.7544 - val_accuracy: 0.0605 Epoch 2/100 119/119 [==============================] - 10s 86ms/step - loss: 3.4912 - accuracy: 0.5753 - val_loss: 15.5555 - val_accuracy: 0.0605 Epoch 3/100 119/119 [==============================] - 10s 83ms/step - loss: 2.7808 - accuracy: 0.6437 - val_loss: 14.3540 - val_accuracy: 0.0605 Epoch 4/100 119/119 [==============================] - 10s 84ms/step - loss: 2.2408 - accuracy: 0.6958 - val_loss: 11.1274 - val_accuracy: 0.0618 Epoch 5/100 119/119 [==============================] - 10s 82ms/step - loss: 1.9199 - accuracy: 0.7063 - val_loss: 3.5010 - val_accuracy: 0.2895 Epoch 6/100 119/119 [==============================] - 10s 82ms/step - loss: 1.6635 - accuracy: 0.7395 - val_loss: 3.8334 - val_accuracy: 0.2895 Epoch 7/100 119/119 [==============================] - 10s 81ms/step - loss: 1.5137 - accuracy: 0.7553 - val_loss: 2.4210 - val_accuracy: 0.5289 Epoch 8/100 119/119 [==============================] - 10s 84ms/step - loss: 1.3953 - accuracy: 0.7626 - val_loss: 1.7405 - val_accuracy: 0.6605 Epoch 9/100 119/119 [==============================] - 10s 85ms/step - loss: 1.3080 - accuracy: 0.7708 - val_loss: 1.6970 - val_accuracy: 0.5974 Epoch 10/100 119/119 [==============================] - 10s 84ms/step - loss: 1.2240 - accuracy: 0.7739 - val_loss: 1.7210 - val_accuracy: 0.6421 Epoch 11/100 119/119 [==============================] - 10s 85ms/step - loss: 1.1794 - accuracy: 0.7845 - val_loss: 1.8140 - val_accuracy: 0.6368 Epoch 12/100 119/119 [==============================] - 10s 86ms/step - loss: 1.1728 - accuracy: 0.7832 - val_loss: 1.4460 - val_accuracy: 0.7224 Epoch 13/100 119/119 [==============================] - 10s 84ms/step - loss: 1.1706 - accuracy: 0.7842 - val_loss: 1.4543 - val_accuracy: 0.6868 Epoch 14/100 119/119 [==============================] - 10s 85ms/step - loss: 1.1221 - accuracy: 0.7921 - 
val_loss: 1.5919 - val_accuracy: 0.6500 Epoch 15/100 119/119 [==============================] - 10s 87ms/step - loss: 1.0678 - accuracy: 0.8024 - val_loss: 1.4568 - val_accuracy: 0.6803 Epoch 16/100 119/119 [==============================] - 10s 86ms/step - loss: 1.0451 - accuracy: 0.8074 - val_loss: 1.4927 - val_accuracy: 0.6316 Epoch 17/100 119/119 [==============================] - 10s 85ms/step - loss: 1.0262 - accuracy: 0.8132 - val_loss: 1.5192 - val_accuracy: 0.6842 Epoch 18/100 119/119 [==============================] - 10s 87ms/step - loss: 1.0231 - accuracy: 0.8097 - val_loss: 1.2717 - val_accuracy: 0.7408 Epoch 19/100 119/119 [==============================] - 10s 81ms/step - loss: 1.0367 - accuracy: 0.8045 - val_loss: 1.5636 - val_accuracy: 0.7013 Epoch 20/100 119/119 [==============================] - 10s 80ms/step - loss: 1.0146 - accuracy: 0.8176 - val_loss: 1.2293 - val_accuracy: 0.7368 Epoch 21/100 119/119 [==============================] - 10s 82ms/step - loss: 0.9864 - accuracy: 0.8187 - val_loss: 1.5116 - val_accuracy: 0.6908 Epoch 22/100 119/119 [==============================] - 10s 82ms/step - loss: 0.9953 - accuracy: 0.8232 - val_loss: 0.9927 - val_accuracy: 0.8329 Epoch 23/100 119/119 [==============================] - 10s 87ms/step - loss: 1.0131 - accuracy: 0.8145 - val_loss: 2.2017 - val_accuracy: 0.5382 Epoch 24/100 119/119 [==============================] - 10s 87ms/step - loss: 0.9733 - accuracy: 0.8258 - val_loss: 0.8997 - val_accuracy: 0.8355 Epoch 25/100 119/119 [==============================] - 10s 84ms/step - loss: 0.9846 - accuracy: 0.8168 - val_loss: 1.5127 - val_accuracy: 0.6961 Epoch 26/100 119/119 [==============================] - 10s 87ms/step - loss: 1.0100 - accuracy: 0.8126 - val_loss: 1.3453 - val_accuracy: 0.6882 Epoch 27/100 119/119 [==============================] - 11s 88ms/step - loss: 0.9877 - accuracy: 0.8192 - val_loss: 1.2285 - val_accuracy: 0.6855 Epoch 28/100 119/119 [==============================] - 11s 
89ms/step - loss: 0.9239 - accuracy: 0.8329 - val_loss: 1.3245 - val_accuracy: 0.6961 Epoch 29/100 119/119 [==============================] - 10s 88ms/step - loss: 0.9082 - accuracy: 0.8376 - val_loss: 3.8040 - val_accuracy: 0.4461 Epoch 30/100 119/119 [==============================] - 10s 87ms/step - loss: 0.9712 - accuracy: 0.8274 - val_loss: 1.2105 - val_accuracy: 0.7474 Epoch 31/100 119/119 [==============================] - 11s 89ms/step - loss: 0.9736 - accuracy: 0.8255 - val_loss: 1.9182 - val_accuracy: 0.5908 Epoch 32/100 119/119 [==============================] - 10s 87ms/step - loss: 0.9711 - accuracy: 0.8361 - val_loss: 1.6938 - val_accuracy: 0.6513 Epoch 33/100 119/119 [==============================] - 11s 90ms/step - loss: 0.8937 - accuracy: 0.8453 - val_loss: 1.1029 - val_accuracy: 0.7803 Epoch 34/100 119/119 [==============================] - 11s 94ms/step - loss: 0.9338 - accuracy: 0.8334 - val_loss: 1.0038 - val_accuracy: 0.8066 Epoch 35/100 119/119 [==============================] - 11s 90ms/step - loss: 0.9232 - accuracy: 0.8358 - val_loss: 1.8710 - val_accuracy: 0.5658 Epoch 36/100 119/119 [==============================] - 10s 86ms/step - loss: 0.8830 - accuracy: 0.8474 - val_loss: 1.3772 - val_accuracy: 0.6868 Epoch 37/100 119/119 [==============================] - 10s 86ms/step - loss: 0.8999 - accuracy: 0.8405 - val_loss: 2.0077 - val_accuracy: 0.5829 Epoch 38/100 119/119 [==============================] - 10s 85ms/step - loss: 0.9128 - accuracy: 0.8408 - val_loss: 1.3446 - val_accuracy: 0.6697 Epoch 39/100 119/119 [==============================] - 11s 89ms/step - loss: 0.8875 - accuracy: 0.8416 - val_loss: 1.2017 - val_accuracy: 0.7461 Epoch 40/100 119/119 [==============================] - 11s 94ms/step - loss: 0.9029 - accuracy: 0.8424 - val_loss: 1.0861 - val_accuracy: 0.8013 Epoch 41/100 119/119 [==============================] - 11s 90ms/step - loss: 0.9045 - accuracy: 0.8453 - val_loss: 1.0153 - val_accuracy: 0.8237 Epoch 42/100 
119/119 [==============================] - 11s 93ms/step - loss: 0.9249 - accuracy: 0.8361 - val_loss: 2.7466 - val_accuracy: 0.5329 Epoch 43/100 119/119 [==============================] - 11s 90ms/step - loss: 0.9154 - accuracy: 0.8482 - val_loss: 1.0288 - val_accuracy: 0.8184 Epoch 44/100 119/119 [==============================] - 11s 91ms/step - loss: 0.8632 - accuracy: 0.8503 - val_loss: 1.1140 - val_accuracy: 0.7618
Training Loss: The training loss increased from 0.2960 to 0.8632. This is expected with L2 regularization: the reported loss now includes a penalty of 0.01 times the sum of squared weights for every regularized kernel, so it is no longer directly comparable to the unregularized model's loss. The model is balancing fitting the data against keeping its weights small.
Training Accuracy: The training accuracy decreased slightly, from 0.8905 to 0.8503. This is also a common effect of regularization: the constrained model cannot fit the training data as closely as before.
Validation Loss: The validation loss increased from 0.4808 to 1.1140. Part of this increase is again the penalty term baked into the reported loss, but the size of the gap suggests the model is also genuinely performing worse on held-out data.
Validation Accuracy: The validation accuracy decreased from 0.8474 to 0.7618. Taken together with the higher validation loss, this indicates that a regularization coefficient of 0.01 constrains the model too aggressively for this problem; a smaller value (for example 1e-3 or 1e-4) would be worth trying before concluding that L2 regularization hurts generalization here.
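To see why the reported losses jump under L2 regularization, it helps to compute the penalty a single kernel contributes. A minimal sketch with NumPy, assuming a randomly initialized kernel shaped like layer 1's (the exact magnitude in the trained model will differ):

```python
import numpy as np

# With kernel_regularizer=regularizers.l2(0.01), Keras adds
# 0.01 * sum(w**2) to the loss for each regularized kernel.
rng = np.random.default_rng(seed=0)
kernel = rng.normal(scale=0.1, size=(3, 3, 3, 32))  # shaped like layer 1's kernel

l2_penalty = 0.01 * float(np.sum(kernel ** 2))
print(f"penalty contributed by this one kernel: {l2_penalty:.4f}")
```

Summed over all four regularized layers (three conv kernels plus the 2048x128 dense kernel, which alone holds ~262k weights), these penalties account for much of the difference between the two models' reported loss values.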
from tensorflow.keras.optimizers import Adam, RMSprop, SGD
# Original model definition
model3 = Sequential()
# Layer 1
model3.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu', padding='same'))
model3.add(layers.BatchNormalization())
model3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# Layer 2
model3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model3.add(layers.BatchNormalization())
model3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# Layer 3
model3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model3.add(layers.BatchNormalization())
model3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# Flattening the layer before fully connected layers
model3.add(Flatten())
# Adding a fully connected layer with 128 neurons
model3.add(layers.BatchNormalization())
model3.add(Dense(units=128, activation='relu'))
model3.add(Dropout(0.2))
# The final output layer with 12 neurons to predict the categorical classification
model3.add(Dense(units=12, activation='softmax'))
# Compile and fit the model with each optimizer in turn.
# Note: recompiling does not reset the learned weights, so each fit below
# continues from where the previous run stopped; for an independent
# comparison, rebuild the model (or restore its initial weights) before each run.
# Adam optimizer
adam_opt = Adam(learning_rate=0.001) # You can adjust the learning rate
model3.compile(optimizer=adam_opt, loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(training_set, validation_data=(X_val, y_val), epochs=50)
# RMSprop optimizer
rmsprop_opt = RMSprop(learning_rate=0.001) # You can adjust the learning rate
model3.compile(optimizer=rmsprop_opt, loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(training_set, validation_data=(X_val, y_val), epochs=50)
# SGD optimizer
sgd_opt = SGD(learning_rate=0.01, momentum=0.9) # You can adjust the learning rate and momentum
model3.compile(optimizer=sgd_opt, loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(training_set, validation_data=(X_val, y_val), epochs=50)
Epoch 1/50 119/119 [==============================] - 12s 89ms/step - loss: 1.8095 - accuracy: 0.4289 - val_loss: 7.8318 - val_accuracy: 0.0605 Epoch 2/50 119/119 [==============================] - 11s 88ms/step - loss: 1.3025 - accuracy: 0.5803 - val_loss: 12.7077 - val_accuracy: 0.0605 Epoch 3/50 119/119 [==============================] - 11s 89ms/step - loss: 1.0731 - accuracy: 0.6353 - val_loss: 12.4137 - val_accuracy: 0.0605 Epoch 4/50 119/119 [==============================] - 11s 89ms/step - loss: 0.9264 - accuracy: 0.6900 - val_loss: 8.9081 - val_accuracy: 0.1355 Epoch 5/50 119/119 [==============================] - 11s 91ms/step - loss: 0.8211 - accuracy: 0.7237 - val_loss: 2.8311 - val_accuracy: 0.3382 Epoch 6/50 119/119 [==============================] - 11s 92ms/step - loss: 0.7298 - accuracy: 0.7453 - val_loss: 1.0870 - val_accuracy: 0.6263 Epoch 7/50 119/119 [==============================] - 11s 91ms/step - loss: 0.6902 - accuracy: 0.7584 - val_loss: 0.8665 - val_accuracy: 0.7566 Epoch 8/50 119/119 [==============================] - 11s 92ms/step - loss: 0.6726 - accuracy: 0.7668 - val_loss: 1.1735 - val_accuracy: 0.6474 Epoch 9/50 119/119 [==============================] - 11s 90ms/step - loss: 0.6300 - accuracy: 0.7861 - val_loss: 0.7304 - val_accuracy: 0.7605 Epoch 10/50 119/119 [==============================] - 11s 90ms/step - loss: 0.6166 - accuracy: 0.7850 - val_loss: 1.1073 - val_accuracy: 0.6934 Epoch 11/50 119/119 [==============================] - 11s 89ms/step - loss: 0.5583 - accuracy: 0.8047 - val_loss: 1.7276 - val_accuracy: 0.5592 Epoch 12/50 119/119 [==============================] - 10s 87ms/step - loss: 0.5294 - accuracy: 0.8126 - val_loss: 0.6225 - val_accuracy: 0.7987 Epoch 13/50 119/119 [==============================] - 11s 88ms/step - loss: 0.5047 - accuracy: 0.8255 - val_loss: 3.6152 - val_accuracy: 0.3500 Epoch 14/50 119/119 [==============================] - 11s 91ms/step - loss: 0.4968 - accuracy: 0.8197 - val_loss: 0.9013 
- val_accuracy: 0.7039 Epoch 15/50 119/119 [==============================] - 11s 90ms/step - loss: 0.4780 - accuracy: 0.8255 - val_loss: 1.0726 - val_accuracy: 0.6908 Epoch 16/50 119/119 [==============================] - 11s 92ms/step - loss: 0.4644 - accuracy: 0.8284 - val_loss: 1.0888 - val_accuracy: 0.7171 Epoch 17/50 119/119 [==============================] - 10s 86ms/step - loss: 0.4408 - accuracy: 0.8400 - val_loss: 0.6818 - val_accuracy: 0.7987 Epoch 18/50 119/119 [==============================] - 10s 88ms/step - loss: 0.4321 - accuracy: 0.8426 - val_loss: 0.6015 - val_accuracy: 0.8092 Epoch 19/50 119/119 [==============================] - 10s 86ms/step - loss: 0.4191 - accuracy: 0.8503 - val_loss: 1.3302 - val_accuracy: 0.6763 Epoch 20/50 119/119 [==============================] - 10s 88ms/step - loss: 0.4114 - accuracy: 0.8503 - val_loss: 1.0408 - val_accuracy: 0.7487 Epoch 21/50 119/119 [==============================] - 11s 90ms/step - loss: 0.3994 - accuracy: 0.8555 - val_loss: 0.8209 - val_accuracy: 0.7737 Epoch 22/50 119/119 [==============================] - 11s 90ms/step - loss: 0.3936 - accuracy: 0.8553 - val_loss: 1.8111 - val_accuracy: 0.6500 Epoch 23/50 119/119 [==============================] - 10s 85ms/step - loss: 0.3844 - accuracy: 0.8568 - val_loss: 0.5733 - val_accuracy: 0.8053 Epoch 24/50 119/119 [==============================] - 10s 86ms/step - loss: 0.3907 - accuracy: 0.8597 - val_loss: 0.8515 - val_accuracy: 0.7184 Epoch 25/50 119/119 [==============================] - 10s 84ms/step - loss: 0.3786 - accuracy: 0.8574 - val_loss: 1.0763 - val_accuracy: 0.7013 Epoch 26/50 119/119 [==============================] - 10s 87ms/step - loss: 0.3530 - accuracy: 0.8674 - val_loss: 0.4646 - val_accuracy: 0.8526 Epoch 27/50 119/119 [==============================] - 10s 85ms/step - loss: 0.3658 - accuracy: 0.8621 - val_loss: 0.5061 - val_accuracy: 0.8342 Epoch 28/50 119/119 [==============================] - 10s 85ms/step - loss: 0.3513 - 
accuracy: 0.8768 - val_loss: 0.5110 - val_accuracy: 0.8132 Epoch 29/50 119/119 [==============================] - 10s 86ms/step - loss: 0.3333 - accuracy: 0.8747 - val_loss: 0.8780 - val_accuracy: 0.7855 Epoch 30/50 119/119 [==============================] - 11s 89ms/step - loss: 0.3426 - accuracy: 0.8755 - val_loss: 0.6740 - val_accuracy: 0.8079 Epoch 31/50 119/119 [==============================] - 11s 90ms/step - loss: 0.3309 - accuracy: 0.8742 - val_loss: 0.8863 - val_accuracy: 0.7579 Epoch 32/50 119/119 [==============================] - 11s 92ms/step - loss: 0.3204 - accuracy: 0.8779 - val_loss: 0.7347 - val_accuracy: 0.7711 Epoch 33/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2996 - accuracy: 0.8818 - val_loss: 1.5743 - val_accuracy: 0.6421 Epoch 34/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2943 - accuracy: 0.8863 - val_loss: 0.8108 - val_accuracy: 0.7421 Epoch 35/50 119/119 [==============================] - 11s 91ms/step - loss: 0.3066 - accuracy: 0.8858 - val_loss: 0.4492 - val_accuracy: 0.8605 Epoch 36/50 119/119 [==============================] - 11s 89ms/step - loss: 0.3239 - accuracy: 0.8868 - val_loss: 0.8521 - val_accuracy: 0.7829 Epoch 37/50 119/119 [==============================] - 11s 92ms/step - loss: 0.3104 - accuracy: 0.8805 - val_loss: 0.6245 - val_accuracy: 0.8092 Epoch 38/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2940 - accuracy: 0.8887 - val_loss: 0.6429 - val_accuracy: 0.8171 Epoch 39/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2899 - accuracy: 0.8955 - val_loss: 0.6335 - val_accuracy: 0.8171 Epoch 40/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2940 - accuracy: 0.8874 - val_loss: 1.8815 - val_accuracy: 0.6618 Epoch 41/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2947 - accuracy: 0.8924 - val_loss: 1.0767 - val_accuracy: 0.7329 Epoch 42/50 119/119 [==============================] - 
11s 90ms/step - loss: 0.2765 - accuracy: 0.8937 - val_loss: 0.4200 - val_accuracy: 0.8526 Epoch 43/50 119/119 [==============================] - 10s 86ms/step - loss: 0.2973 - accuracy: 0.8905 - val_loss: 1.0326 - val_accuracy: 0.7171 Epoch 44/50 119/119 [==============================] - 10s 85ms/step - loss: 0.2611 - accuracy: 0.9003 - val_loss: 0.3571 - val_accuracy: 0.8816 Epoch 45/50 119/119 [==============================] - 11s 89ms/step - loss: 0.2756 - accuracy: 0.8900 - val_loss: 0.8248 - val_accuracy: 0.7645 Epoch 46/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2755 - accuracy: 0.8939 - val_loss: 0.8444 - val_accuracy: 0.7697 Epoch 47/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2709 - accuracy: 0.8966 - val_loss: 0.4683 - val_accuracy: 0.8368 Epoch 48/50 119/119 [==============================] - 10s 88ms/step - loss: 0.2754 - accuracy: 0.8955 - val_loss: 0.3764 - val_accuracy: 0.8816 Epoch 49/50 119/119 [==============================] - 11s 93ms/step - loss: 0.2642 - accuracy: 0.8987 - val_loss: 0.9828 - val_accuracy: 0.7868 Epoch 50/50 119/119 [==============================] - 11s 92ms/step - loss: 0.2505 - accuracy: 0.9045 - val_loss: 0.5122 - val_accuracy: 0.8513 Epoch 1/50 119/119 [==============================] - 12s 91ms/step - loss: 0.2752 - accuracy: 0.8966 - val_loss: 1.1052 - val_accuracy: 0.7500 Epoch 2/50 119/119 [==============================] - 10s 86ms/step - loss: 0.2600 - accuracy: 0.9032 - val_loss: 2.0052 - val_accuracy: 0.6632 Epoch 3/50 119/119 [==============================] - 11s 89ms/step - loss: 0.2559 - accuracy: 0.9032 - val_loss: 2.3483 - val_accuracy: 0.5382 Epoch 4/50 119/119 [==============================] - 11s 88ms/step - loss: 0.2598 - accuracy: 0.9013 - val_loss: 0.4266 - val_accuracy: 0.8737 Epoch 5/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2497 - accuracy: 0.9026 - val_loss: 0.8809 - val_accuracy: 0.7947 Epoch 6/50 119/119 
[==============================] - 11s 90ms/step - loss: 0.2400 - accuracy: 0.9108 - val_loss: 0.5343 - val_accuracy: 0.8513 Epoch 7/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2557 - accuracy: 0.9076 - val_loss: 1.5355 - val_accuracy: 0.6987 Epoch 8/50 119/119 [==============================] - 11s 94ms/step - loss: 0.2638 - accuracy: 0.9058 - val_loss: 1.2353 - val_accuracy: 0.7039 Epoch 9/50 119/119 [==============================] - 11s 88ms/step - loss: 0.2628 - accuracy: 0.9016 - val_loss: 0.4989 - val_accuracy: 0.8500 Epoch 10/50 119/119 [==============================] - 11s 89ms/step - loss: 0.2417 - accuracy: 0.9092 - val_loss: 0.4782 - val_accuracy: 0.8618 Epoch 11/50 119/119 [==============================] - 11s 93ms/step - loss: 0.2630 - accuracy: 0.9061 - val_loss: 1.2285 - val_accuracy: 0.6579 Epoch 12/50 119/119 [==============================] - 10s 88ms/step - loss: 0.2471 - accuracy: 0.9116 - val_loss: 0.5314 - val_accuracy: 0.8289 Epoch 13/50 119/119 [==============================] - 10s 84ms/step - loss: 0.2413 - accuracy: 0.9087 - val_loss: 0.3883 - val_accuracy: 0.8737 Epoch 14/50 119/119 [==============================] - 10s 88ms/step - loss: 0.2504 - accuracy: 0.9066 - val_loss: 0.7891 - val_accuracy: 0.8026 Epoch 15/50 119/119 [==============================] - 10s 87ms/step - loss: 0.2515 - accuracy: 0.9071 - val_loss: 0.3149 - val_accuracy: 0.9000 Epoch 16/50 119/119 [==============================] - 10s 83ms/step - loss: 0.2551 - accuracy: 0.9116 - val_loss: 0.6692 - val_accuracy: 0.8579 Epoch 17/50 119/119 [==============================] - 10s 85ms/step - loss: 0.2463 - accuracy: 0.9082 - val_loss: 0.3891 - val_accuracy: 0.8974 Epoch 18/50 119/119 [==============================] - 10s 88ms/step - loss: 0.2321 - accuracy: 0.9137 - val_loss: 0.4340 - val_accuracy: 0.8355 Epoch 19/50 119/119 [==============================] - 11s 88ms/step - loss: 0.2337 - accuracy: 0.9116 - val_loss: 0.8994 - val_accuracy: 
0.7855 Epoch 20/50 119/119 [==============================] - 10s 87ms/step - loss: 0.2291 - accuracy: 0.9129 - val_loss: 0.4151 - val_accuracy: 0.8395 Epoch 21/50 119/119 [==============================] - 10s 87ms/step - loss: 0.2320 - accuracy: 0.9153 - val_loss: 1.0166 - val_accuracy: 0.7421 Epoch 22/50 119/119 [==============================] - 10s 87ms/step - loss: 0.2285 - accuracy: 0.9168 - val_loss: 0.9395 - val_accuracy: 0.7842 Epoch 23/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2384 - accuracy: 0.9171 - val_loss: 2.4540 - val_accuracy: 0.6184 Epoch 24/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2252 - accuracy: 0.9187 - val_loss: 1.5792 - val_accuracy: 0.7092 Epoch 25/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2351 - accuracy: 0.9113 - val_loss: 1.2675 - val_accuracy: 0.7276 Epoch 26/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2171 - accuracy: 0.9166 - val_loss: 3.9870 - val_accuracy: 0.5526 Epoch 27/50 119/119 [==============================] - 11s 93ms/step - loss: 0.2356 - accuracy: 0.9139 - val_loss: 2.7326 - val_accuracy: 0.5539 Epoch 28/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2209 - accuracy: 0.9211 - val_loss: 1.8362 - val_accuracy: 0.7197 Epoch 29/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2187 - accuracy: 0.9155 - val_loss: 0.8524 - val_accuracy: 0.8316 Epoch 30/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2315 - accuracy: 0.9195 - val_loss: 0.5879 - val_accuracy: 0.8171 Epoch 31/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2261 - accuracy: 0.9147 - val_loss: 0.7002 - val_accuracy: 0.8342 Epoch 32/50 119/119 [==============================] - 11s 88ms/step - loss: 0.2293 - accuracy: 0.9079 - val_loss: 1.0185 - val_accuracy: 0.7842 Epoch 33/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2217 - accuracy: 0.9166 - 
val_loss: 1.2110 - val_accuracy: 0.7250 Epoch 34/50 119/119 [==============================] - 11s 93ms/step - loss: 0.2286 - accuracy: 0.9221 - val_loss: 1.6681 - val_accuracy: 0.6724 Epoch 35/50 119/119 [==============================] - 11s 88ms/step - loss: 0.2396 - accuracy: 0.9113 - val_loss: 0.5327 - val_accuracy: 0.8724 Epoch 36/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2125 - accuracy: 0.9192 - val_loss: 0.5302 - val_accuracy: 0.8539 Epoch 37/50 119/119 [==============================] - 11s 88ms/step - loss: 0.1970 - accuracy: 0.9229 - val_loss: 0.3899 - val_accuracy: 0.9105 Epoch 38/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2294 - accuracy: 0.9184 - val_loss: 3.9311 - val_accuracy: 0.5868 Epoch 39/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2178 - accuracy: 0.9211 - val_loss: 1.9331 - val_accuracy: 0.7316 Epoch 40/50 119/119 [==============================] - 11s 92ms/step - loss: 0.2029 - accuracy: 0.9247 - val_loss: 1.0154 - val_accuracy: 0.7776 Epoch 41/50 119/119 [==============================] - 11s 92ms/step - loss: 0.2215 - accuracy: 0.9150 - val_loss: 0.5992 - val_accuracy: 0.8566 Epoch 42/50 119/119 [==============================] - 11s 92ms/step - loss: 0.2156 - accuracy: 0.9211 - val_loss: 0.4154 - val_accuracy: 0.8789 Epoch 43/50 119/119 [==============================] - 11s 91ms/step - loss: 0.2038 - accuracy: 0.9263 - val_loss: 0.8678 - val_accuracy: 0.8474 Epoch 44/50 119/119 [==============================] - 11s 89ms/step - loss: 0.2142 - accuracy: 0.9245 - val_loss: 0.9055 - val_accuracy: 0.8421 Epoch 45/50 119/119 [==============================] - 10s 85ms/step - loss: 0.2395 - accuracy: 0.9097 - val_loss: 0.5263 - val_accuracy: 0.8750 Epoch 46/50 119/119 [==============================] - 11s 90ms/step - loss: 0.1969 - accuracy: 0.9276 - val_loss: 0.7177 - val_accuracy: 0.8697 Epoch 47/50 119/119 [==============================] - 11s 90ms/step - 
loss: 0.2170 - accuracy: 0.9195 - val_loss: 1.3517 - val_accuracy: 0.7461 Epoch 48/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2122 - accuracy: 0.9197 - val_loss: 1.0184 - val_accuracy: 0.7803 Epoch 49/50 119/119 [==============================] - 11s 90ms/step - loss: 0.2104 - accuracy: 0.9258 - val_loss: 1.1770 - val_accuracy: 0.8000 Epoch 50/50 119/119 [==============================] - 11s 92ms/step - loss: 0.2186 - accuracy: 0.9179 - val_loss: 2.0094 - val_accuracy: 0.7408 Epoch 1/50 119/119 [==============================] - 11s 91ms/step - loss: 1.8933 - accuracy: 0.5679 - val_loss: 184.6571 - val_accuracy: 0.1645 Epoch 2/50 119/119 [==============================] - 11s 93ms/step - loss: 0.9310 - accuracy: 0.7053 - val_loss: 113.3356 - val_accuracy: 0.1382 Epoch 3/50 119/119 [==============================] - 11s 90ms/step - loss: 0.6909 - accuracy: 0.7739 - val_loss: 51.5713 - val_accuracy: 0.1487 Epoch 4/50 119/119 [==============================] - 11s 90ms/step - loss: 0.6367 - accuracy: 0.7834 - val_loss: 34.2442 - val_accuracy: 0.2474 Epoch 5/50 119/119 [==============================] - 11s 90ms/step - loss: 0.7129 - accuracy: 0.7697 - val_loss: 32.0015 - val_accuracy: 0.2500 Epoch 6/50 119/119 [==============================] - 10s 87ms/step - loss: 0.6044 - accuracy: 0.7958 - val_loss: 1.6103 - val_accuracy: 0.6368 Epoch 7/50 119/119 [==============================] - 10s 83ms/step - loss: 0.5392 - accuracy: 0.8189 - val_loss: 1.1910 - val_accuracy: 0.6974 Epoch 8/50 119/119 [==============================] - 10s 87ms/step - loss: 0.5098 - accuracy: 0.8234 - val_loss: 3.8461 - val_accuracy: 0.5026 Epoch 9/50 119/119 [==============================] - 10s 84ms/step - loss: 0.4776 - accuracy: 0.8303 - val_loss: 0.8265 - val_accuracy: 0.7711 Epoch 10/50 119/119 [==============================] - 10s 84ms/step - loss: 0.4803 - accuracy: 0.8332 - val_loss: 1.0495 - val_accuracy: 0.6868 Epoch 11/50 119/119 
Training log (epochs 11-50, abridged): training loss fell steadily from ~0.42 to 0.2656 and training accuracy rose from ~0.85 to 0.9013, while validation accuracy oscillated between roughly 0.58 and 0.86 from epoch to epoch.
Epoch 50/50 119/119 [==============================] - 10s 81ms/step - loss: 0.2656 - accuracy: 0.9013 - val_loss: 0.5673 - val_accuracy: 0.8461
Training Loss: The model with RMSprop achieved a lower training loss (0.2186) compared to the original model (0.2960). This suggests that RMSprop was more effective in reducing the training loss, indicating improved convergence during training.
Training Accuracy: The model with RMSprop achieved a higher training accuracy (0.9179) compared to the original model (0.8905). This indicates that RMSprop helped the model fit the training data more accurately.
Validation Loss: However, the model with RMSprop experienced a significantly higher validation loss (2.0094) compared to the original model's validation loss (0.4808). This is a critical difference and suggests that the model with RMSprop may be overfitting the training data, as the validation loss is much higher than the training loss.
Validation Accuracy: The validation accuracy of the model with RMSprop (0.7408) is lower than that of the original model (0.8474), further indicating overfitting in the RMSprop-optimized model.
Training Loss: The model with SGD optimization achieved a lower training loss (0.2656) compared to the original model (0.2960). This indicates that SGD was more effective in reducing the training loss, implying better convergence during training.
Training Accuracy: The model with SGD optimization achieved a slightly higher training accuracy (0.9013) compared to the original model (0.8905). This suggests that SGD helped the model fit the training data slightly better.
Validation Loss: The validation loss of the model with SGD optimization (0.5673) is slightly higher than that of the original model (0.4808). This indicates that the SGD-optimized model may have slightly more difficulty generalizing to the validation dataset.
Validation Accuracy: The validation accuracy of the model with SGD optimization (0.8461) is slightly lower than that of the original model (0.8474), but the difference is minimal. This suggests that both models perform similarly in terms of validation accuracy.
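The optimizer comparisons above boil down to reading the final-epoch metrics of each run and looking at the gap between validation and training loss. A minimal sketch of that bookkeeping, using plain dicts in place of the `History.history` attribute that Keras' `model.fit()` returns (so it runs without TensorFlow), with the final-epoch values quoted above:

```python
# Final-epoch metrics for each optimizer run, as quoted in the text above.
runs = {
    "adam (original)": {"loss": 0.2960, "accuracy": 0.8905,
                        "val_loss": 0.4808, "val_accuracy": 0.8474},
    "rmsprop":         {"loss": 0.2186, "accuracy": 0.9179,
                        "val_loss": 2.0094, "val_accuracy": 0.7408},
    "sgd":             {"loss": 0.2656, "accuracy": 0.9013,
                        "val_loss": 0.5673, "val_accuracy": 0.8461},
}

def generalization_gap(metrics):
    """Gap between validation and training loss; a large gap hints at overfitting."""
    return metrics["val_loss"] - metrics["loss"]

best = min(runs, key=lambda name: runs[name]["val_loss"])
for name, m in sorted(runs.items(), key=lambda kv: generalization_gap(kv[1])):
    print(f"{name:16s} gap={generalization_gap(m):+.4f} val_acc={m['val_accuracy']:.4f}")
print("lowest validation loss:", best)  # → adam (original)
```

Ranked this way, the original Adam run shows the smallest generalization gap (0.18), SGD is close behind (0.30), and RMSprop's gap (1.79) confirms the overfitting noted above.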
from sklearn.model_selection import train_test_split

random_state = 42
val_split = 0.20

# First split: 80% train, 20% held out (stratified by class)
X_train, X_test1, y_train, y_test1 = train_test_split(
    preprocessed_data_color, labels, test_size=0.20,
    stratify=labels, random_state=random_state)

# Parallel split of the raw colour images (used later for individual image predictions)
X_train_color, X_test1_color, y_train_color, y_test1_color = train_test_split(
    images, labels, test_size=0.20, stratify=labels, random_state=random_state)

# Second split: the 20% hold-out becomes validation (80% of it) and test (20% of it)
X_val, X_test, y_val, y_test = train_test_split(
    X_test1, y_test1, test_size=0.20, stratify=y_test1, random_state=random_state)

# Same second split for the colour images
X_val_color, X_test_color, y_val_color, y_test_color = train_test_split(
    X_test1_color, y_test1, test_size=0.20, stratify=y_test1, random_state=random_state)

X = np.concatenate((X_train, X_test1))
y = np.concatenate((y_train, y_test1))
print("X_train shape: ", X_train.shape)
print("y_train shape: ", y_train.shape)
print("X_val shape: ", X_val.shape)
print("y_val shape: ", y_val.shape)
print("X_test shape: ", X_test.shape)
print("y_test shape: ", y_test.shape)
print("X shape: ", X.shape)
print("y shape: ", y.shape)
X_train shape:  (3800, 64, 64, 3)
y_train shape:  (3800, 12)
X_val shape:  (760, 64, 64, 3)
y_val shape:  (760, 12)
X_test shape:  (190, 64, 64, 3)
y_test shape:  (190, 12)
X shape:  (4750, 64, 64, 3)
y shape:  (4750, 12)
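The two nested 80/20 splits leave 80% of the data for training, 16% (80% of the 20% hold-out) for validation, and 4% for test. A quick sanity check that this arithmetic reproduces the printed shapes (the fractions are exact here, so rounding conventions do not matter):

```python
# Derive the expected subset sizes from the two nested 80/20 splits.
n_total = 4750                    # total number of images in the dataset
holdout = round(n_total * 0.20)   # first split: 20% held out -> 950
n_train = n_total - holdout       # 80% -> 3800
n_test = round(holdout * 0.20)    # second split: 20% of the hold-out -> 190
n_val = holdout - n_test          # remaining 80% of the hold-out -> 760

print(n_train, n_val, n_test)  # → 3800 760 190
```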
# Ensure the expected (N, 64, 64, 3) shape and float32 dtype
X_train = X_train.reshape(X_train.shape[0], 64, 64, 3).astype('float32')
X_val = X_val.reshape(X_val.shape[0], 64, 64, 3).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 64, 64, 3).astype('float32')
# Data augmentation: random rotations, shifts, shears, zooms, and horizontal flips
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)
# Creating data generators
random_state = 42
batch_size = 32
training_set = train_datagen.flow(X_train, y_train, batch_size=batch_size,
                                  seed=random_state, shuffle=True)
# Note: validation data should not be augmented, and should come from the
# validation split rather than the test split, so we use a plain generator here.
val_datagen = ImageDataGenerator()
validation_set = val_datagen.flow(X_val, y_val, batch_size=batch_size,
                                  seed=random_state, shuffle=False)
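The kinds of transformations `ImageDataGenerator` applies can be illustrated directly in NumPy. The sketch below is a toy stand-in (random horizontal flip plus a random width shift with edge padding, loosely mirroring `fill_mode='nearest'`), not a replacement for the Keras generator:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, shift_frac=0.2):
    """Toy augmentation: random horizontal flip plus a random width shift,
    filling the vacated columns by repeating the edge column."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]              # horizontal flip
    max_shift = int(image.shape[1] * shift_frac)
    shift = int(rng.integers(-max_shift, max_shift + 1))
    if shift != 0:
        image = np.roll(image, shift, axis=1)  # shift columns (returns a copy)
        if shift > 0:                          # repair wrapped columns with edge values
            image[:, :shift, :] = image[:, shift:shift + 1, :]
        else:
            image[:, shift:, :] = image[:, shift - 1:shift, :]
    return image

batch = rng.random((4, 64, 64, 3), dtype=np.float32)
augmented = np.stack([augment(img) for img in batch])
print(augmented.shape)  # → (4, 64, 64, 3)
```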
# Initializing a sequential model
model4 = Sequential()
#Layer 1
model4.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu', padding = 'same'))
model4.add(layers.BatchNormalization())
model4.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 2
model4.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model4.add(layers.BatchNormalization())
model4.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 3
model4.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model4.add(layers.BatchNormalization())
model4.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 4
model4.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model4.add(layers.BatchNormalization())
model4.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 5
model4.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model4.add(layers.BatchNormalization())
model4.add(MaxPooling2D(pool_size = (2, 2),strides=2))
# Flattening the layer before fully connected layers
model4.add(Flatten())
# Adding a fully connected layer with 128 neurons
model4.add(layers.BatchNormalization())
model4.add(Dense(units = 128, activation = 'relu'))
model4.add(Dropout(0.1))
# The final output layer with 12 neurons (one per class) for the categorical classification
model4.add(Dense(units = 12, activation = 'softmax'))
# Instantiate the Adam optimizer (note the relatively high learning rate of 0.01)
adam_opt = optimizers.Adam(learning_rate=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model4.compile(optimizer = adam_opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])
model4.summary()
Model: "sequential_13"
_________________________________________________________________
 Layer (type)                     Output Shape           Param #
=================================================================
 conv2d_37 (Conv2D)               (None, 64, 64, 32)     896
 batch_normalization_47 (BatchNormalization)
                                  (None, 64, 64, 32)     128
 max_pooling2d_35 (MaxPooling2D)  (None, 32, 32, 32)     0
 conv2d_38 (Conv2D)               (None, 32, 32, 32)     9248
 batch_normalization_48 (BatchNormalization)
                                  (None, 32, 32, 32)     128
 max_pooling2d_36 (MaxPooling2D)  (None, 16, 16, 32)     0
 conv2d_39 (Conv2D)               (None, 16, 16, 32)     9248
 batch_normalization_49 (BatchNormalization)
                                  (None, 16, 16, 32)     128
 max_pooling2d_37 (MaxPooling2D)  (None, 8, 8, 32)       0
 conv2d_40 (Conv2D)               (None, 8, 8, 32)       9248
 batch_normalization_50 (BatchNormalization)
                                  (None, 8, 8, 32)       128
 max_pooling2d_38 (MaxPooling2D)  (None, 4, 4, 32)       0
 conv2d_41 (Conv2D)               (None, 4, 4, 32)       9248
 batch_normalization_51 (BatchNormalization)
                                  (None, 4, 4, 32)       128
 max_pooling2d_39 (MaxPooling2D)  (None, 2, 2, 32)       0
 flatten_11 (Flatten)             (None, 128)            0
 batch_normalization_52 (BatchNormalization)
                                  (None, 128)            512
 dense_22 (Dense)                 (None, 128)            16512
 dropout_11 (Dropout)             (None, 128)            0
 dense_23 (Dense)                 (None, 12)             1548
=================================================================
Total params: 57100 (223.05 KB)
Trainable params: 56524 (220.80 KB)
Non-trainable params: 576 (2.25 KB)
_________________________________________________________________
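The parameter counts in this summary can be verified by hand: a Conv2D layer has (kh · kw · in_channels + 1) · filters parameters, a Dense layer has (n_in + 1) · n_out, and BatchNormalization carries 4 per channel (gamma and beta trainable; moving mean and variance non-trainable). A quick check against the 57,100 total:

```python
def conv2d_params(kh, kw, c_in, filters):
    # One kh*kw*c_in kernel plus one bias per filter.
    return (kh * kw * c_in + 1) * filters

def dense_params(n_in, n_out):
    # Weight matrix plus one bias per output unit.
    return (n_in + 1) * n_out

def batchnorm_params(channels):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * channels

total = (
    conv2d_params(3, 3, 3, 32)         # conv2d_37: 896
    + 4 * conv2d_params(3, 3, 32, 32)  # conv2d_38..41: 9248 each
    + 5 * batchnorm_params(32)         # one BN per conv block: 128 each
    + batchnorm_params(128)            # BN before the dense layer: 512
    + dense_params(128, 128)           # dense_22: 16512
    + dense_params(128, 12)            # dense_23: 1548
)
print(total)  # → 57100
```

The 576 non-trainable parameters are the moving statistics: 2 per channel over (5 · 32 + 128) = 288 channels.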
# Early stopping on validation accuracy (patience 20, restoring the best weights)
callback_es = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20, min_delta=0.0001, restore_best_weights=True)
# Fit the compiled model4 to the training data
model4_history = model4.fit(
training_set,
batch_size=batch_size,
epochs=100,
validation_data=(X_val, y_val),
shuffle=True,
callbacks=[callback_es]
)
Epoch 1/100 119/119 [==============================] - 13s 94ms/step - loss: 1.8258 - accuracy: 0.3855 - val_loss: 5.5003 - val_accuracy: 0.0961
Training log (epochs 2-57, abridged): training accuracy climbed steadily to ~0.86 while validation accuracy swung widely (roughly 0.06-0.86); the best validation accuracy, 0.8553 (val_loss 0.4878), occurred at epoch 38.
Epoch 58/100 119/119 [==============================] - 11s 90ms/step - loss: 0.4066 - accuracy: 0.8587 - val_loss: 0.9973 - val_accuracy: 0.6921
(Early stopping halted training 20 epochs after the epoch-38 peak and restored the epoch-38 weights.)
Training Loss: This configuration's reported training loss (2.4158) is far higher than the original model's (0.2960), indicating that the model performs much worse at minimizing the loss.
Training Accuracy: The reported training accuracy (13.77%) is likewise far below the original model's 89.05%, indicating poor performance even on the training data.
Validation Loss: The reported validation loss (49,120,076) is extreme compared to the original model's 0.4808, a sign that the model fails entirely to generalize to unseen data.
Validation Accuracy: At 6.52%, validation accuracy is also far below the original model's 84.74%, confirming that this configuration performs poorly on the validation data.
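With `restore_best_weights=True`, EarlyStopping keeps the weights from the epoch with the best monitored value rather than the last one. The selection logic can be sketched over a `history` dict (using an illustrative subset of validation values from the log above):

```python
# Pick the best epoch the way EarlyStopping(monitor='val_accuracy',
# restore_best_weights=True) does: the highest monitored value wins.
history = {
    "val_accuracy": [0.0961, 0.3197, 0.7158, 0.8553, 0.8105, 0.6921],
    "val_loss":     [5.5003, 2.4767, 0.9152, 0.4878, 0.5254, 0.9973],
}

best_epoch = max(range(len(history["val_accuracy"])),
                 key=history["val_accuracy"].__getitem__)
print(best_epoch + 1, history["val_accuracy"][best_epoch])  # → 4 0.8553
```

In the real run, the monitored peak came at epoch 38, so those are the weights the model retains despite training continuing to epoch 58.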
# Initializing a sequential model
model5 = Sequential()
#Layer 1
model5.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu', padding = 'same'))
model5.add(layers.BatchNormalization())
model5.add(MaxPooling2D(pool_size = (2, 2),strides=2))
#Layer 2
model5.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model5.add(layers.BatchNormalization())
model5.add(MaxPooling2D(pool_size = (2, 2),strides=2))
# Flattening the layer before fully connected layers
model5.add(Flatten())
# Adding a fully connected layer with 128 neurons
model5.add(layers.BatchNormalization())
model5.add(Dense(units = 128, activation = 'relu'))
model5.add(Dropout(0.2))
# The final output layer with 12 neurons (one per class) for the categorical classification
model5.add(Dense(units = 12, activation = 'softmax'))
# Instantiate the Adam optimizer
adam_opt = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model5.compile(optimizer = adam_opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])
model5.summary()
Model: "sequential_14"
_________________________________________________________________
 Layer (type)                     Output Shape           Param #
=================================================================
 conv2d_42 (Conv2D)               (None, 64, 64, 32)     896
 batch_normalization_53 (BatchNormalization)
                                  (None, 64, 64, 32)     128
 max_pooling2d_40 (MaxPooling2D)  (None, 32, 32, 32)     0
 conv2d_43 (Conv2D)               (None, 32, 32, 32)     9248
 batch_normalization_54 (BatchNormalization)
                                  (None, 32, 32, 32)     128
 max_pooling2d_41 (MaxPooling2D)  (None, 16, 16, 32)     0
 flatten_12 (Flatten)             (None, 8192)           0
 batch_normalization_55 (BatchNormalization)
                                  (None, 8192)           32768
 dense_24 (Dense)                 (None, 128)            1048704
 dropout_12 (Dropout)             (None, 128)            0
 dense_25 (Dense)                 (None, 12)             1548
=================================================================
Total params: 1093420 (4.17 MB)
Trainable params: 1076908 (4.11 MB)
Non-trainable params: 16512 (64.50 KB)
_________________________________________________________________
# Fit the compiled model5 to the training data
model5_history = model5.fit(
training_set,
batch_size=batch_size,
epochs=100,
validation_data=(X_val, y_val),
shuffle=True,
callbacks=[callback_es]
)
Epoch 1/100 119/119 [==============================] - 12s 92ms/step - loss: 3.0409 - accuracy: 0.2984 - val_loss: 10.8116 - val_accuracy: 0.0605
Training log (epochs 2-43, abridged): training accuracy rose steadily to ~0.79 while validation accuracy fluctuated between roughly 0.06 and 0.78; the best validation accuracy, 0.7803 (val_loss 0.7486), occurred at epoch 24.
Epoch 44/100 119/119 [==============================] - 11s 94ms/step - loss: 0.6047 - accuracy: 0.7921 - val_loss: 1.8317 - val_accuracy: 0.6342
(Early stopping halted training 20 epochs after the epoch-24 peak.)
Training Accuracy: The original model achieved a higher training accuracy of approximately 89.05% compared to the provided model's accuracy of 79.21%. This suggests that the original model learned the training data more effectively.
Validation Accuracy: The original model also outperformed the provided model in terms of validation accuracy. The original model achieved a validation accuracy of 84.74%, whereas the provided model reached a lower validation accuracy of 63.42%. This indicates that the original model's ability to generalize to unseen data was better.
Losses: The training loss for the original model (0.2960) was lower than that of the provided model (0.6047), indicating that the original model fit the training data better. Similarly, the original model's validation loss (0.4808) was considerably lower than the provided model's validation loss (1.8317), suggesting better generalization.
Comparing the two CNN models, the original model is the clear winner. It achieved a training accuracy of 89.05% and a validation accuracy of 84.74%, showing that it both learned the patterns in the training data and generalized effectively to unseen images. The shallower second model lagged behind at 79.21% training and 63.42% validation accuracy, suggesting it lacked the capacity to capture the finer distinctions between species.
In summary, the original CNN model is the better choice for this classification task, offering stronger pattern recognition and markedly better generalization.